Search Results: "Sam Hartman"

10 December 2010

Sam Hartman: Privacy

I attended a workshop sponsored by the IAB, W3C, ISOC and MIT on Internet Privacy. The workshop had much more of a web focus than it should have: the web is quite important and should certainly have covered a majority of the time, but backend issues, network issues, and mobile applications are certainly important too. For me this workshop was an excellent place to think about linkability and correlation of information. When people describe attacks such as using the ordered list of fonts installed in a web browser to distinguish one person from another, it's all too easy to dismiss people who want to solve that attack as the privacy fringe. Who cares if someone knows my IP address or what fonts I use? The problem is that computers are very good at putting data together. If you log into a web site once, and then later come back to that same website, it's relatively easy to fingerprint your browser and determine that it is the same computer. There's enough information that even if you use private browsing mode, clear your cookies and move IP addresses, it's relatively easy to perform this sort of linking.

It's important to realize that partially fixing this sort of issue will make it take longer to link two things with certainty, but tends not to actually help in the long run. Consider the font issue. If your browser returns the set of fonts it has in the order they are installed, then that provides a lot of information. Your fingerprint will look the same as people who took the same OS updates and browser updates and installed the same additional fonts in exactly the same order as you. Let's say that the probability that someone has the same font fingerprint as you is one in a million. For a lot of websites that's enough that you could very quickly be linked. Sorting the list of fonts reduces the information; in that case, let's say your probability of having the same font set as someone else is one in a hundred. The website gets much less information from the fonts. However, it can combine that information with timing information and so on. It can immediately rule out all the people who have a different font profile. Then, as all the other people who have the same font fingerprint access the website over time, differences between them and you will continue to rule them out until eventually only you are left. Obviously this is at a high level. One important high-level note is that you can't fix these sorts of fingerprinting issues on your own; trying makes things far worse. If you're the only one whose browser doesn't give out a font list at all, then it's really easy to identify you.

The big question in my mind now is how much we care about this linking. Governments have the technology to do a lot with linking. We don't have anything technical we can do to stop them, so we'll need to handle that with laws. Large companies like Google, Facebook and our ISPs are also in a good position to take significant advantage of linking. Again, though, these companies can be regulated; technology will play a part, especially in telling them what we're comfortable with and what we're not, but most users will not need to physically prevent Google and Facebook from linking their data. However, smaller websites are under a lot less supervision than the large companies. Unless you take significant steps, such a website can link all your activities on that website. Also, if any group of websites in that space want to share information, they can link across the websites.

I'd like to run thought experiments to understand how bad this is. I'd like to come up with examples of things that people share with small websites but don't want linked together, or alternatively don't want linked back to their identity, and then look at how this information could be linked. However, I'm having trouble with these thought experiments because I'm just not very privacy minded. I can't think of something that I share on the web that I wouldn't link directly to my primary identity. I certainly can't find anything concrete enough to be able to evaluate how much I care to protect it. Help here would be appreciated, particularly if you can think of fairly specific examples. There's lots of information I prefer to keep private, like credit card numbers, but there it's not about linking at all: I can reasonably assume that the person I'm giving my credit card number to has a desire to respect my privacy.
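The arithmetic behind this linking is simple: roughly independent attributes multiply, so even a well-mitigated attribute still contributes bits toward a unique fingerprint. A small sketch in Python, with made-up probabilities (none of these numbers are measurements, and real attributes are not fully independent):

```python
import math

# Hypothetical per-attribute match probabilities: the chance that a
# random other user shares your value for that attribute.
UNSORTED_FONTS = 1e-6          # ordered install list: very revealing
mitigated = {
    "font list (sorted)": 1e-2,  # sorting discards the ordering information
    "timezone": 1 / 24,
    "screen resolution": 1 / 50,
    "user-agent string": 1 / 200,
}

def bits(p):
    """Identifying information carried by a match probability, in bits."""
    return -math.log2(p)

def combined(probs):
    """Probability that another user matches on *every* attribute,
    assuming (optimistically for the attacker) independence."""
    out = 1.0
    for p in probs.values():
        out *= p
    return out

print(f"unsorted fonts alone: {bits(UNSORTED_FONTS):.1f} bits")
p = combined(mitigated)
print(f"mitigated attributes combined: {bits(p):.1f} bits")
```

With these illustrative numbers, the sorted font list plus a handful of mundane attributes carries *more* identifying information than the unsorted font list alone, which is the point of the post: partial fixes buy time, not unlinkability.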

3 December 2010

Sam Hartman: Bad Hair Day for Kerberos

Tuesday, MIT Kerberos had a bad hair day, one of those days where you're looking through your hair and realize that it's turned to Medusa's snakes while you weren't looking. Apparently, since the introduction of RC4, MIT Kerberos has had significant problems handling checksums. Recall that when Kerberos talks about checksums it's conflating two things: unkeyed checksums like SHA-1 and message authentication codes like HMAC-SHA1 used with an AES key derivation. The protocol doesn't have a well-defined concept of an unkeyed checksum, although it does have the concept of checksums like CRC32 that ignore their keys and can be modified by an attacker.

One way of looking at it is that checksums were over-abstracted and generalized. Around the time that 3DES was introduced, there was a belief that we'd have a generalized mechanism for introducing new crypto systems. By the time RFC 3961 actually got written, we'd realized that we could not abstract things quite as far as we'd done for 3DES. The code, however, was written as part of adding 3DES support. There are two major classes of problem. The first is that the 3DES (and I believe AES) checksums don't actually depend on the crypto system: they're just HMACs. They do end up needing to perform encryption operations as part of key derivation. However, the code permitted these checksums to be used with any key, not just the kind of key that was intended. In a nice abstract way, the operations of the crypto system associated with the key were used rather than those of the crypto system loosely associated with the checksum. I guess that's good: feeding a 128-bit key into 3DES might kind of confuse 3DES, which expects a 168-bit key. On the other hand, RC4 has a block size of 1 because it is a stream cipher. For various reasons, that means that regardless of what RC4 key you start with, if you use the 3DES checksum with that key, there are only 256 possible outputs for the HMAC. Sadly, that's not a lot of work for the attacker. To make matters worse, one of the common interfaces for choosing the right checksum to use was to enumerate through the set of available checksums and pick the first one that would accept the kind of key in question. Unfortunately, 3DES came before RC4, and there are some cases where the wrong checksum would be used.

Another serious set of problems stems from the handling of unkeyed checksums. It's important to check that a received checksum is keyed if you are in a context where an attacker could have modified it. Using an MD5 outside of encrypted text to integrity-protect a message doesn't make sense. Some of the code was not good about checking this.

What worries me most about this set of issues is how many new vulnerabilities were introduced recently. The set of things you can do with 1.6 based on these errors was significant, but not nearly as impressive as 1.7. A whole new set of attacks was added for the 1.8 release. In my mind, the most serious attack was added for the 1.7 release. A remote attacker can send an integrity-protected GSS-API token using an unkeyed checksum. Since there's no key, the attacker doesn't need to worry about not knowing it. However, the checksum verifies, and the code is happy to go forward.

I think we need to take a close look at how we got here and what went wrong. The fact that multiple later releases made the problem worse makes it clear that we produced a set of APIs where doing the wrong thing is easier than doing the right thing. It seems like there is something important to fix here about our existing APIs and documentation. It might be possible to add tests or things to look for when adding new crypto systems. However, I also think there is an important lesson to take away at a design level. Right now I don't know what the answers are, but I encourage the community to think closely about this issue. I'm speaking about MIT Kerberos because I'm familiar with the details there. However, it's my understanding that the entire Kerberos community has been thinking about checksums lately, and MIT is not the only implementation with improvements to make here.
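To make the two failure classes concrete, here is a deliberately simplified Python sketch of the defensive checks the vulnerable code paths skipped. This is not MIT krb5 code: the checksum registry is illustrative, the enctype names are made up, and real RFC 3961 keyed checksums derive a usage-specific key rather than feeding the raw key straight to HMAC.

```python
import hashlib
import hmac

# Illustrative registry: each checksum type records whether it is keyed
# and which key type (enctype) it was designed for.
CHECKSUMS = {
    "md5":              {"keyed": False, "enctype": None},
    "hmac-sha1-des3":   {"keyed": True,  "enctype": "des3"},
    "hmac-sha1-aes128": {"keyed": True,  "enctype": "aes128"},
}

def verify(cksum_type, key_enctype, key, message, received):
    """Verify a checksum defensively.

    Two checks corresponding to the bugs described above:
    1. refuse unkeyed checksums in a context where an attacker could
       have modified the message (the GSS-API token attack);
    2. refuse a keyed checksum paired with a key of the wrong enctype
       (e.g. the 3DES checksum fed an RC4 key).
    """
    info = CHECKSUMS[cksum_type]
    if not info["keyed"]:
        raise ValueError("unkeyed checksum rejected in integrity context")
    if info["enctype"] != key_enctype:
        raise ValueError("checksum %s requires a %s key, got %s"
                         % (cksum_type, info["enctype"], key_enctype))
    expected = hmac.new(key, message, hashlib.sha1).digest()
    return hmac.compare_digest(expected, received)
```

The point of the sketch is that safety lives in the lookup table, not in each call site: an API where every caller must remember to ask "is this checksum keyed, and keyed with the right kind of key?" is exactly the API where, as above, doing the wrong thing is easier than doing the right thing.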

30 November 2010

Sam Hartman: Implementation Progress

At the end of September, things were quite exciting as we had our first project meeting. At that meeting those in the room saw a demonstration of the Moonshot GSS EAP mechanism, and we discussed a number of open issues and began to plan for our test infrastructure. We've made significant progress on the specification front and on explaining Moonshot to important communities since then. However, there has been little public progress on the implementation front. Unfortunately, getting the necessary legal clearance and agreements to release code often takes longer than anyone would like; that is what is happening here. We're all eagerly awaiting final approval from the lawyers and JANET(UK) management. However, things have been moving behind the scenes. Throughout much of October, Luke Howard and Linus Nordberg were working on their respective parts of the code. I've also been working on putting together the test and build infrastructure. As we discussed at the meeting, we're going to use Debian and Ubuntu as the basis for our testing. For example, we hope to release virtual machine images for these platforms for the major Moonshot components. Thus the primary build environment for our testing and virtualization will be for Debian. I've been putting together that here. Right now, that branch will pull together packages of the SAML infrastructure that we need. I've also been looking into virtualized test frameworks and believe I've found one that meets our needs. I've also put together some primitive build infrastructure that is independent of packaging, available here. I've set up a buildbot that builds both environments. So, as the code becomes available we'll be in a good position to start making it available.

Sam Hartman: Abfab at IETF 79

The ABFAB working group, which will be standardizing technologies that Moonshot depends on, had its first meeting at IETF 79 in Beijing, China. The meeting was quite productive. Because the meeting was the first of the working group, there were some introductory presentations. A group of authors are putting together a proposed architecture document; we presented the current state of our work. However things have evolved significantly since the working group meeting and I think it will make more sense to wait a couple of weeks to discuss the architecture document. Most of the time was spent on two presentations. The first was the status of the GSS mechanism. We discussed issues that were discovered while implementing the EAP GSS-API mechanism. Discussion in the room tended to support the proposals made in the slides. A few issues will need to come to the list. We had the most interesting discussion of SAML AAA integration. Minutes are available.

18 November 2010

Raphaël Hertzog: 4 tips to maintain a 3.0 (quilt) Debian source package in a VCS

Most Debian packages are managed with a version control system (VCS) like git, subversion, bazaar or mercurial. The particularities of the 3.0 (quilt) source format are not without consequences in terms of integration with the VCS. I'll give you some tips for a smoother experience. All the samples given in the article assume that you use git as the version control system. 1. Add .pc to the VCS ignore list .pc is the directory used by quilt to store its internal data (list of applied patches, backup of modified files). It's also created by dpkg-source so that quilt knows that the patches are in debian/patches (and not in patches, which is the default directory used by quilt). For that reason, the directory is kept even if you unapply all the patches. However, you don't want to store this directory in your repository, so it's best to put it in the VCS ignore list. With git you simply do:
$ echo ".pc" >>.gitignore
$ git add .gitignore
$ git commit -m "Ignore quilt dir"
The .gitignore file is ignored by dpkg-source, so you're not adding any noise to the generated source package. 2. Unapply patches after the build If you store upstream sources with non-applied patches (most people do), and if you don't build packages in a temporary build directory, then you probably want to unapply the patches after the build so that your repository is again in a clean state. You can do this by adding unapply-patches to debian/source/local-options:
$ echo "unapply-patches" >>debian/source/local-options
$ git add debian/source/local-options
$ git commit -m "Unapply patches after build"
svn-buildpackage always builds in a temporary directory, so the repository is left exactly like it was before the build; this option is thus useless there. git-buildpackage can also be told to build in a temporary directory with --git-export-dir=../build-area/ (the directory ../build-area/ is the one used by svn-buildpackage, so this option makes git-buildpackage behave like svn-buildpackage in that respect). 3. Manage your quilt patches as a git branch Instead of using quilt to manage the Debian-specific patches, it's possible to use git itself. git-buildpackage comes with gbp-pq (Git-BuildPackage Patch Queue): it can export the quilt series in a git branch that you can manipulate as you want. Each commit represents a patch, so you want to rebase that branch to edit intermediary commits. Check out the upstream documentation of this tool to learn how to work with it. There's an alternative tool as well: git-dpm. Its website explains the principle very well. It's a bit more complicated than gbp-pq, but it has the advantage of keeping the history of all branches used to generate the quilt series of all Debian releases. You might want to read a review by Sam Hartman; it explains the limits of this tool. 4. Document how to review the changes One of the main benefits of this new source format is that it's easy to review changes, because upstream changes are kept as separate patches properly documented (ideally using the DEP-3 format). With the tools above, the commit message becomes the patch header. Thus it's important to write meaningful commit messages. This works well as long as your workflow considers the Debian patches as a branch that you rebase on top of the upstream sources at each release. Some maintainers don't like this workflow and prefer to have the Debian changes applied directly in the packaging branch. They switch to a new upstream version by merging it into their packaging branch. In that case, it's difficult to generate a quilt series out of the VCS.
Instead, you should instruct dpkg-source to store all the changes in a single patch (which is then similar to the good old .diff.gz) and document in the header of that patch how the changes can be better reviewed, for example in the VCS web interface. You do the former with the --single-debian-patch option and the latter by writing the header in debian/source/patch-header:
$ echo "single-debian-patch" >> debian/source/local-options
$ cat >debian/source/patch-header <<END
This patch contains all the Debian-specific
changes mixed together. To review them
separately, please inspect the VCS history
at http://git.debian.org/?=collab-maint/foo.git
(put more details here)
END


2 November 2010

Sam Hartman: Review of Git DPM

I've been looking at git-dpm. It's a tool for managing Debian packages in git. The description promises the world: use of git to maintain both a set of patches to some upstream software and the history of those patches, both at the same time. It also promises the ability to let me share my development repositories. The overhead is much less than topgit. My feelings are mixed. It does deliver on its promise to allow you to maintain your upstream patches and to use git normally-ish. It produces nice quilt patch series.

In order to understand the downside, it's necessary to understand a bit about how it works. It maintains your normal branch roughly the way you do things today. There's also a branch that is rebased often that is effectively the quilt series. It starts from the upstream and has a commit for every patch in your quilt series containing exactly the contents of that patch. This branch is merged into your debian branch when you run git-dpm update-patches. The downside is that it makes the debian branch kind of fragile. If you commit an upstream change to the debian branch, it will be reverted by the next git-dpm update-patches. The history will not be lost, but unless you have turned that change into a commit along your patches branch, git-dpm update-patches will remove the change without warning. This can be particularly surprising if the change in question is a merge from the upstream branch. In that case, the merged-in changes will all be lost, but since the tip of the upstream branch is in your history, future merges will not bring them back. If you either also merge your changes into the patches branch, or tell git-dpm about a new upstream, then the changes will reappear at the next git-dpm update-patches.

The other fragility is that rebasing your debian branch can have really unpleasant consequences. The problem is that rebase removes merge commits: it assumes that the only changes introduced by merge commits are resolutions of conflicts. However, git-dpm synthesizes merge commits. In particular, it takes the debian directory from your debian branch, plus the upstream sources from your upstream branch, plus all the patches you applied, and calls that your new debian branch. There's also a metadata file that contains pointers to the upstream branch and the quilt patch series branch. In addition, debian/patches gets populated. Discarding this commit completely would simply roll you back to the previous state of your patches. However, this loses history! There is no reference left pointing to the patches branch; the only reason it is still referenced is that it's a parent of the commit rebase is throwing away. This isn't really all that unusual; if you merged in any branch, deleted the branch and rebased away both the merge and the commits from the branch, you'd lose history. I do find it a bit easier to do with git-dpm. However, it's more likely that you'll manage to rebase and keep the commits from the patches branch. In particular, if the horizon of your rebase includes a merge of the patches branch, then for every patch that involves changes to the debian branch from the point where rebase starts rewriting history, you will see a commit included by your rebase. My suspicion is that this commit is more likely to generate conflicts than usual, because it's a patch on top of the upstream sources being applied to a partially patched debian branch. However, I have not fully worked through things: it may be that such a patch is exactly as likely to cause conflicts as it would in a git-dpm update-patches operation. If that succeeds and you don't fail to pick one of these commits, you'll end up with roughly the same debian sources you started with. There are two key exceptions: the git-dpm metadata file is not updated, and your debian/patches is not consistent with your sources. The first means you're in the state of the previous paragraph: git-dpm update-patches will blow away your changes. The second means that any source package you produce is going to be kind of funky if not outright broken.

It is possible to recover from git-dpm plus rebase. I think if you git-dpm checkout-patches, then cherry-pick the patch-related commits from your debian branch that were introduced by the rebase, then git-dpm update-patches, you'll be in a sane state. Obviously you could also recover by finding the pre-rebase commit and resetting there. I'm assuming, though, that you actually needed to rebase for some reason.

Of course git-dpm does involve a space penalty. You generate a commit for every patch in your quilt series for every version of the upstream sources you deal with. Also, you will introduce and store every version of the quilt series directly in your debian directory. The extra commits probably aren't a big deal: the blobs probably delta-compress well. I'm not sure, though, that the quilt patch blobs will compress well with other things. In my opinion the ability to get nice quilt patches is well worth the space penalty.

In conclusion, git-dpm is a sharp tool. Actually, I think that's understating things. Git-dpm involves a bunch of rotating blades. You stick your face and foot between the blades, and if you use it correctly, there's exactly enough clearance that nothing gets damaged and you get some fairly neat results. Other outcomes are available. I'm still trying to evaluate if it is worth it. I think the tradeoff might be different for something like the krb5 package, which is maintained by two very experienced maintainers, than say for Shibboleth, which seems to involve several more maintainers. I'm not sure that there is much git-dpm can do to make things better. Even detecting loss-of-foot events might be kind of tricky.

28 October 2010

Sam Hartman: Moonshot at Kerberos

At the MIT Kerberos Consortium's 2010 conference, Josh Howlett and Sam Hartman delivered a talk on Moonshot. Slides should be up in a day or so. We reported on status and gave a brief overview. The new material was apropos for the venue. At the bar BOF back in March at IETF 77, we received several comments on Moonshot's limitations. It doesn't work well for services that require rapid authentications for multiple requests. There's not a good story for use when a Moonshot service needs to contact another service. There isn't a good standardized mechanism for mapping in domain-specific policy. We presented a proposal that Luke and Sam developed to optionally provide a Kerberos ticket as part of Moonshot authentication. This scales from a service that simply generates its own service tickets all the way through resource domains that have many services and complex policy and provide the client a TGT. Clients can implement the feature in order to achieve better performance. Servers can implement the feature in order to get delegation support within a resource domain and to get policy mapping. Luke has prototyped a version of this service involving a service ticket. We plan on briefly mentioning a desire to have extensible fast re-authentication support at the ABFAB meeting at IETF 79. However, in the interest of getting the working group off to a good start, we're going to focus on the well-understood parts of the system and formally propose this extension after IETF 79.

13 October 2010

Sam Hartman: ABFAB working group approved

Yesterday, the Application Bridging for Federated Authentication working group was approved in the IETF. This working group's charter includes the IETF technologies needed by Project Moonshot. The group will meet at IETF 79 in Beijing this November. Meanwhile, at last month's Moonshot meeting in Copenhagen, an initial version of the technology was demonstrated. We're still working through some of the administrative details needed before we can release the code for public review. There have been several exciting discussions both on the Moonshot implementation list and on the ABFAB list over the past few weeks.

1 June 2010

Sam Hartman: Moonshot at TNC2010

Moonshot is being discussed at the TERENA TNC 2010 conference. Our session started at 08:00 UTC (a few minutes ago), but will be going on for around the next hour or so. There is a presentation before Moonshot, but then Josh is up. See here for streaming and the Moonshot web site for our updated specifications. When the session is archived I'll post a pointer to the video stream.

18 May 2010

Sam Hartman: Moonshot: Federated Authentication Beyond the Web

Recently, I've been working on an exciting project with JANET(UK) to bring federated authentication to non-web applications. I've worked on authentication projects a lot, although this is the first federation project I've worked on. The big difference appears to be an emphasis on credential independence: the subject (the person trying to authenticate) and the service will not share a common credential type. Within their organization, the subject and their identity provider share a credential. Then, the federation has some credential mechanism such that the user's organization and the service share some (probably completely different) type of credential. The other emphasis is on providing personalization.

For web applications, there are a lot of options to achieve this: Information Card, OpenID, SAML, and OAuth all provide solutions in this space. However, there are not good options for non-web applications. If you out-source your mail and chat infrastructure but want to use your own chat client or IMAP client, then you will not get the same federation benefits you can get with the web. If you're using usernames and passwords and don't mind the potential problems with your out-sourcing provider being able to impersonate all your users, you can simply synchronize usernames and passwords. Within an enterprise, you can do better using Kerberos.

JANET(UK) runs the UK Access Federation, which is a SAML-based web single-sign-on federation. In order to better meet the needs of their customers, they'd like to expand this offering to non-web applications. This demand is apparently shared across the European academic community. I suspect there is also some demand in the US academic community and in enterprise situations.

With the web, it turns out that you have a convenient platform for interactions with the identity provider: you can simply direct a web browser to the identity provider and need not specify the user interaction with the web browser at all. This is seen as a significant branding and usability advantage. With other environments, it becomes necessary to specify the interaction with the identity provider. Consider an automated client that wishes to examine a mailbox and provide advanced mail sorting or aggregation. That automated client cannot directly use a web browser. OAuth solves this issue with an enrollment step that does typically involve a web browser and produces a consumer key, and an authentication step that does not. However, for non-web clients it seems like avoiding reliance on the browser for authentication will be important. It turns out we already have widely used technologies that do this: the Extensible Authentication Protocol (EAP) mediates the interaction between a subject and identity provider for obtaining network access. It also turns out that we have fairly good technologies for abstract security services within non-web applications: thanks to Kerberos and Active Directory, many application protocols and a fair number of applications support GSS-API. JANET(UK) proposes to combine these technologies with SAML in order to produce a solution for federation beyond the web.

I prepared a feasibility analysis of this proposal. At a technical level, the proposal is sound. There's a lot of standardization and implementation work, but there appears to be sufficient motivation to form the seeds of a standards activity and put together a proof-of-concept implementation. However, the big question is: will anyone use it? In particular, to be useful beyond fairly small communities, support from client vendors and application framework vendors will be needed. It's taken massive money and around 20 years to get Kerberos support to a point where it is effective within an enterprise. Moonshot can leverage that work to a large extent, but Moonshot may also have greater usability and penetration goals.

It's interesting that I'm advocating EAP for application-layer authentication. When I was a Security AD, I made a strong statement that EAP must only be used for network access. I've been fairly consistent about that since then. I think there are two huge problems with using EAP for application authentication. The first is that EAP only authenticates the home realm; it does not authenticate what service you're going to. So you might try to connect to your e-mail and end up giving something access to your stored files and pictures instead. That is, EAP has a phishing exposure in the federated context. If the only thing you can get by using EAP is network access, that exposure is only moderate. However, in a fully federated environment that is a huge exposure. Moonshot will address this problem by using EAP channel binding and by doing the necessary work to make that a viable solution. The second concern is that interoperability is reduced when you have multiple authentication approaches for the same problem. If EAP is going to be used for application authentication, we need to understand how it relates to the rest of the application authentication metasystem. Moonshot proposes such a relationship, addressing my objection.

Moonshot is designed to work well with the objectives of the Identity Metasystem and its laws of identity. It uses a different underlying technology, but it does have an approach for dealing with claims-based identity and hopefully will have a user experience very similar to the identity metasystem. However, one of the main beliefs behind the identity metasystem is that it is the user experience and universal interoperability that are important, not any specific technology. In its domain, the technologies Moonshot selects will be a better fit than a web services stack.

It's strange not to be working on Kerberos; Moonshot uses some Kerberos technology, but its core is definitely not today's Kerberos. In some ways it is fun to be working on something new. There's one aspect of Kerberos I really miss: Moonshot has nothing like tickets. There's no place to remember state, and no exchange that directly involves the client in what the server learns. My analysis talks about ways to make Moonshot more like Kerberos; there are some potential advantages, but so far, the tradeoffs do not justify changes. We're hoping to have a bar BOF at IETF 77 and a BOF in the summer at IETF 78.
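The channel-binding point can be shown with a toy model. This is not the EAP protocol, just an HMAC sketch with made-up secrets and service names: without binding, a proof of identity is equally valid at any service, which is exactly the phishing exposure; mixing the intended service name into the keyed computation makes the proof useless anywhere else.

```python
import hashlib
import hmac

# Hypothetical credential shared by the user and their home realm.
SECRET = b"user+realm shared secret"

def respond_unbound(challenge):
    # Proves who you are, but not which service you meant to reach:
    # a malicious service can relay this proof to any other service.
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def respond_bound(challenge, service):
    # Channel binding: mix the *intended* service identity into the
    # proof, so a response meant for "imap/mail.example" cannot be
    # passed off as authentication to "files/evil.example".
    return hmac.new(SECRET, challenge + b"|" + service,
                    hashlib.sha256).digest()
```

The server-side check then verifies the response against its own name, so a relayed proof computed for a different service simply fails to match.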

Sam Hartman: Moonshot Bar BOF Thursday March 20 at 9 PM; specs available

At IETF 77, we're having a get-together to discuss federated authentication beyond the web. The meeting will be in the Manhattan room starting at 9 PM US Pacific time. I think audio streaming will be available; I will post a link closer to the meeting time. In the last entry, I mentioned that a preliminary spec would be available; see the preliminary EAP GSS-API mechanism. A use-case paper and slide set are being reviewed internally and should be ready early next week. We may even have preliminary versions of the binding between RADIUS and SAML available before IETF. There have been a number of great discussions on the moonshot-community list and with others interested in the broader area.

Sam Hartman: Two SASL mechanisms for Federated Authentication

There are two other approaches that are likely to come up tonight; see this message for details. These mechanisms require significantly lower infrastructure than Moonshot, but do not provide all the benefits. One question is whether there is a continuum of use-cases depending on what level of investment in client changes are made.

Sam Hartman: Internet2 Moonshot Briefing Paper

Please see here for a briefing paper including snapshots of all our specs as well as an updated use case paper. This paper was presented at the end of April at the Internet2 Spring Members meeting. This is a great snapshot of Project Moonshot at the end of last month.

28 April 2010

Sam Hartman: Moonshot at Internet2

Monday morning, Project Moonshot was presented to the US networking research community at the Internet2 spring members meeting. Our presentation was well received. We presented an updated briefing paper as well as much of the same material presented earlier at IETF. We're moving forward to the planning phase for our standardization and implementation efforts. If you would be interested in getting involved in this exciting federated authentication project, please let us know.

Sam Hartman: Debconf 10 Enterprise Track

I've been asked by the Debconf 10 talks team to coordinate a Debian Enterprise track at the upcoming Debconf 10 conference. I'm really excited about this, and I could use your help. From my standpoint, this all started with a BOF proposal looking at better coordination and what is missing in Debian enterprise integration. The track will also include talks and other events on what exciting things are happening with Debian in the enterprise. I need your help in the form of suggestions for talks, panels and the like (especially if you would be willing to give the talk or coordinate an activity). We're under a fairly tight deadline; ideally proposals would be in by May 1, but if they are not, it's still definitely worth discussing with me. Several topics come to mind from my standpoint; obviously, many other things could fit in, and I'd be interested in your ideas as well.

26 March 2010

Sam Hartman: Slides for Bar BOF

Here is a pointer to the slides for tonight's bar BOF (moonshot-ietf77-01). It's likely that we will only be using the diagram slide.

25 March 2010

Sam Hartman: Federated Authentication discussion tonight at 9 PM Pacific

The federated authentication bar BOF will be held tonight at 9 PM US Pacific time in the Manhattan room at the IETF 77 meeting. Here is information for participation:
  • Reading list
  • Remote participation
  • Join our mailing list

11 March 2010

Sam Hartman: Kerberos 1.8: Anonymous and the Cloud

The Kerberos team recently released Kerberos 5 1.8. This is the first of a couple of posts talking about features in the new release and how they significantly enhance what you can do with Kerberos. Before I get to that, though, I'd like to wax excited for a moment on the development process. There is much more of a community actively involved in the development process. As with the last release, MIT, Painless Security and PADL Software made contributions along with a number of others. However, the biggest change is the number of parties actively working with each other on designs, design reviews, testing and debugging. There was also a lot more real-time collaboration. It was great to see people from Sun, Debian and Red Hat all actively bringing their perspectives to the discussion. My thanks to the Kerberos Consortium for pulling everyone together and for livening up the development process. Kerberos 1.8 testing releases are already available in Debian Squeeze and Ubuntu Lucid. I will be updating Debian to the final release soon, but everything discussed here should already work in both Debian and Ubuntu. I don't know about the state of other distributions, although given how heavily Red Hat was involved in the process, I'm sure they have 1.8 internally.

One of the frustrating problems with previous versions of Kerberos was the need to key hosts before they could run Kerberized services. An administrator needed to set up a keytab and securely get it onto the machine. That creates problems for automated installs of services, virtual services in the cloud, and environments where the people installing servers are not the same as those running the Kerberos realm. Kerberos 1.8 still requires that servers be keyed, but the need for the administrator is removed. Anonymous Kerberos provides a way for a machine to authenticate to Kerberos without an existing account.
That page shows how the Kerberos administration server can be configured to permit machines to create their own keytabs. Anonymous Kerberos does require that pkinit be configured and that the client know the public key of the KDC. However, it is easy to build the KDC public key into an auto-installer image or place it onto a USB key. I think it would be really neat to build a Debian image for Amazon EC2 that would show how easy it is to boot a virtual machine, have it register itself with a Kerberos realm, use something like remctl to request a work load, and then begin serving that work load. The work load could include both clients for distributed computation and even services provided to the world, all secured by Kerberos with automatic bootstrapping. I don't know if I'll have time to put this together, but if someone were interested in helping or paying for the work it would be much more likely to happen. I believe the links above are enough that you should be able to get Anonymous Kerberos working and minimally configured. If not, feel free to send questions; I'll focus more on updating the public instructions than on providing individual help, but I'm definitely interested in making this easy to use.
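As a rough illustration of the bootstrap flow, here is a minimal client-side sketch assuming MIT Kerberos 1.8 built with pkinit. The realm, hostnames and file paths are all illustrative assumptions, not values from any particular deployment; consult the documentation linked above for the authoritative configuration.

```shell
# Sketch only: realm, paths and hostnames below are illustrative.

# /etc/krb5.conf -- the client needs nothing but the KDC's CA certificate:
# [realms]
#   EXAMPLE.COM = {
#     kdc = kdc.example.com
#     pkinit_anchors = FILE:/etc/krb5/kdc-ca.pem
#   }

# On the KDC, the admin enables the anonymous principal once:
kadmin.local -q 'addprinc -randkey WELLKNOWN/ANONYMOUS'

# The new machine then obtains anonymous credentials with no account
# or keytab of its own; pkinit authenticates the KDC to the client:
kinit -n @EXAMPLE.COM

# With anonymous credentials in hand, the host can contact the admin
# server to register itself and write out its own keytab, subject to
# whatever policy the realm administrator has configured.
```

The security of the scheme rests on the client knowing the KDC's public key (the `pkinit_anchors` line), which is exactly the piece that is easy to bake into an install image.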

14 January 2010

Sam Hartman: Open Source Accounting Software

I'm developing a huge backlog of things to write about, going back as far as a couple of posts inspired by the Kerberos conference in October. However, those require more effort to put together, so I'm going to focus on something more recent. Part of running a business, even a small business like Painless Security, is dealing with the administration and bookkeeping. The normal solution seems to be Intuit's QuickBooks. When I set up the company, I looked into that. However, Intuit's accessibility story starts with give up, and goes downhill from there. Apparently, screen reader vendors have offered to work with Intuit to help them, but have been turned down, or at least that's the strong implication I get from reading blogs from these accessibility vendors. So, I'd definitely rather not give Intuit my money. I'm also looking for a bit more than the minimum in accounting software. Keeping the books in order enough to pay taxes would be easy. However, I want to be able to understand where I'm making money; I want to be able to figure out whether the fixed-price contracts I enter into end up being good ideas. I want to understand how expensive the community projects Painless Security gets involved in, like Debian and IETF work, are, both in terms of direct costs and opportunity costs. I want to understand what sorts of work end up being the most profitable. All of these are fairly typical management questions, and solutions are understood to varying degrees. However, it means I'm actually going to use an accounting product for more than just tracking my invoices and preparing taxes. I decided to see what the open source world was like in this space.

I started with Ledger SMB. Ledger SMB's main claim to fame is that it tries to be better than SQL Ledger. Being better than SQL Ledger is definitely a good thing. It worked well enough to generate invoices and income statements. It nominally had facilities to track the sort of per-project information I'm looking for, but the facilities are not nearly good enough. Also, there were some issues: things like the fact that total debits didn't necessarily sum to total credits got old after a very short while. Facilities for correcting mistakes were also unfortunate. You could either operate in a mode where you could delete a transaction, or a mode where you reverse transactions. Deleting transactions in practice meant deleting most of a transaction; balances became out of sync, and half of the transaction tended to stick around in some interfaces but not others. Database cleanup was almost always required. In principle, reversals are better. However, there is no facility to indicate that you're not expecting payment on a reversed invoice. The receivables/payables accounts balance out because of the reversing transaction, but both the reversing and reversed transactions end up becoming overdue. Hey, can you please send me some anti-money to clean this up? The code was dreadful; the goal was to have better abstraction than SQL Ledger. Perhaps that was achieved, but man, that leaves a lot to be desired.

Now I'm playing with Open ERP. It wants to grow up some day to be a competitor to SAP R/3. In a way that sounds good, although it does mean there's a high complexity cost. There's a lot of functionality. There's reasonably good separation between view and model (and possibly even controller). The code is often clean, although the random blocks of commented-out code with no explanation cause me to cringe. Many aspects of the system seem incredibly well designed; you can approach the system through a web client, a graphical client, two different RPC mechanisms, and Excel and OpenOffice plugins. However, there's a mix of missing and completely broken functionality that causes me to wonder whether I'm going to be sad. First, there appears to be basically no support for the US. All addresses are in European format and are hard-coded; each individual report needs to be changed. Recently, I found cases where, as best I can tell, debits and credits are simply reversed. I've found a case where backing up the database succeeds but generates a zero-length file; I almost lost data through that one. There's a complex set of mechanisms to deal with units of measure for products; for example, some jobs are billed in hours, some in days. However, other parts of the code just add quantities. Playing around with this, I am reminded again that I really enjoy thinking about these sorts of problems. An interesting ERP project could be fun to work on; for example, I bet handling ERP needs for some cloud-centered company would be a lot of fun. Also, as part of looking at Open ERP, I've more or less developed the necessary scripts to migrate a simple service company from Ledger SMB to Open ERP. There are limitations of course, but if this would be useful to you, drop me a line.
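The invariant that kept biting above is worth stating precisely: in double-entry bookkeeping, every posted transaction's debits must equal its credits, and corrections are made by posting a reversing transaction, never by deleting. Here is a minimal sketch of that invariant; the data model is illustrative and not taken from Ledger SMB or Open ERP.

```python
# Minimal double-entry invariant: a sane ledger refuses to post any
# transaction whose debits and credits do not sum equally. The tuple
# layout (account, debit, credit) is an illustrative assumption.
from decimal import Decimal
from typing import List, Tuple

Entry = Tuple[str, Decimal, Decimal]  # (account, debit, credit)

def is_balanced(entries: List[Entry]) -> bool:
    total_debit = sum(e[1] for e in entries)
    total_credit = sum(e[2] for e in entries)
    return total_debit == total_credit

invoice = [("Accounts Receivable", Decimal("100.00"), Decimal("0")),
           ("Consulting Income",   Decimal("0"), Decimal("100.00"))]
assert is_balanced(invoice)

# A correction is a new transaction with the sides swapped, never a
# delete, so the audit trail and running balances stay consistent:
reversal = [(acct, credit, debit) for acct, debit, credit in invoice]
assert is_balanced(reversal)
```

Both products fall short of this in different ways: Ledger SMB lets unbalanced totals creep in, and the reversal path fails to mark the reversed invoice as no longer due.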

23 October 2009

Sam Hartman: The Fate of the Three Little Pigs and Little Red Riding Hood

One of the more annoying aspects of deploying Kerberos and GSS-API is making sure that clients have the correct name for the server they're talking to. CIFS, the Windows file-sharing protocol, provided the identity of the server to the client. Windows used this to make a few things easier with NTLM, but does not use this information with Kerberos. I keep finding myself in conversations where someone has the bright idea of making this problem easier by generalizing this mechanism and having the server tell the client its identity. The client can then authenticate to that identity. There's definitely an implementation advantage: you remove all the complexity of name mapping. The problem, of course, is that it matters which server the client is talking to; the client actually needs to make a decision about how much to trust the server. Authentication to the bad guy is just as bad as the bad guy being able to subvert your authentication to the good guy. I was talking recently to another implementor who has similar experience with their customers. Frustrated, I was looking for an analogy simple enough that people could understand the mistake here. I was running through nursery rhymes and other children's tales in my head until I came to the Three Little Pigs. It's perfect!

The pigs do not want the results of the "Let me in" service from the big bad wolf. There's no way that the situation could be made better by more authentication. Asking the wolf for his PIV card (when have you met a big bad wolf who was not a federal contractor?) will not help the pigs decide to let the wolf in. Because the pigs stop to consider who they are getting a service from, they decide not to trust it. Of course, physical security is something the first two pigs should have worked on. However, even their brother would not have been safe if he'd taken an approach similar to the one proposed for GSS-API: asking the bad guy what name to trust before authenticating to see if the bad guy in fact has that name. There may be things we can do to make name mapping easier. However, we cannot provide security without making a trust decision about who we talk to, and whenever we talk about who and trust together, we must consider the security of the mapping. Of course, as Little Red Riding Hood could tell you, sometimes it is all about authentication. Sadly, her grandmother was unable to get better than level of assurance 1 for her identity as Little Red Riding Hood's grandmother, and the wolf was able to claim that identity for himself.
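The trust decision argued for above can be shown in a few lines: the client derives the name it will authenticate to from its own knowledge of what it intended to reach, rather than asking the peer what name to trust. The principal format and hostnames below are illustrative, not from any particular GSS-API deployment.

```python
# Sketch: the client pins the acceptor name itself, instead of
# accepting a name asserted by the server. Names are illustrative.

def expected_principal(service: str, intended_host: str) -> str:
    """Build the acceptor name from the client's own intent: which
    host it set out to contact, and for which service."""
    return f"{service}/{intended_host}@EXAMPLE.COM"

def safe_to_proceed(intended: str, authenticated: str) -> bool:
    """Proceed only if the authenticated peer is the intended one.
    Successful authentication alone proves nothing about trust."""
    return intended == authenticated

pinned = expected_principal("host", "grandma.example.com")

# The big bad wolf authenticates perfectly well -- as the wolf.
# More authentication does not help; the name comparison does:
wolf = expected_principal("host", "wolf.example.com")
assert not safe_to_proceed(pinned, wolf)
assert safe_to_proceed(pinned, expected_principal("host", "grandma.example.com"))
```

The broken proposal amounts to calling `safe_to_proceed(authenticated, authenticated)`, which always succeeds: authenticating to whatever name the bad guy supplies tells you only that the bad guy is who he says he is.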
